Developing a robust language model for an open-vocabulary speech recognition system usually requires substantial resources, as out-of-vocabulary (OOV) words can hurt recognition accuracy. In this work, we applied a hybrid lexicon of word and sub-word units to resolve the problem of OOV words in a resource-efficient way. Since sub-lexical units can be combined to form new words, a compact hybrid vocabulary can be used while still maintaining a low OOV rate. For Thai, a syllable-based unit called the pseudo-morpheme (PM) was chosen as the sub-word unit. To also benefit from the different levels of linguistic information embedded in different input types, a hybrid recurrent neural network language model (RNNLM) framework is proposed. The RNNLM not only models information from multiple input unit types through a hybrid input vector of words and PMs, but also captures long context history through its recurrent connections. Several hybrid input representations were also explored to optimize both recognition accuracy and computational time. The hybrid LM proved to be both resource-efficient and effective on two Thai LVCSR tasks: broadcast news transcription and speech-to-speech translation. The proposed hybrid lexicon can constitute an open vocabulary for Thai LVCSR, as it reduces the OOV rate to less than 1 % while using only 42 % of the vocabulary size of the word-based lexicon. In terms of recognition performance, the best proposed hybrid RNNLM, which uses a mixed word-PM input, achieved a 1.54 % relative word error rate (WER) reduction compared with a conventional word-based RNNLM. In terms of computational time, the best hybrid RNNLM has the lowest training and decoding time among all RNNLMs, including the word-based RNNLM. The overall relative WER reduction of the proposed hybrid RNNLM over a traditional n-gram model is 6.91 %.
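To make the hybrid input idea concrete, the following is a minimal sketch, assuming PyTorch, of an RNNLM whose single vocabulary mixes whole-word and PM units, so that an OOV word is covered by its PM decomposition instead of collapsing to an unknown token. The toy vocabulary, the `to_hybrid_ids` helper, the hard-coded decomposition, and the `HybridRNNLM` topology are illustrative assumptions, not the paper's actual lexicon or network.

```python
# Illustrative sketch of a hybrid word/pseudo-morpheme (PM) RNNLM (assumed
# PyTorch). Vocabulary, PM decomposition, and model sizes are toy examples.
import torch
import torch.nn as nn

# Hybrid lexicon: frequent words kept whole, other words covered by PM units.
word_units = ["<s>", "</s>", "sawasdee", "khrap"]    # whole-word entries
pm_units = ["pha", "sa"]                             # sub-word (PM) entries
vocab = {u: i for i, u in enumerate(word_units + pm_units)}

def to_hybrid_ids(tokens):
    """Map tokens to hybrid IDs: whole words where available, otherwise a
    (hypothetical) PM decomposition keeps the sequence in-vocabulary."""
    pm_decomposition = {"phasa": ["pha", "sa"]}      # toy OOV decomposition
    ids = []
    for tok in tokens:
        if tok in vocab:
            ids.append(vocab[tok])
        else:
            ids.extend(vocab[pm] for pm in pm_decomposition.get(tok, []))
    return ids

class HybridRNNLM(nn.Module):
    """One embedding table over the mixed word+PM vocabulary; the recurrent
    layer carries long-span context across both unit types."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, ids, hidden=None):
        output, hidden = self.rnn(self.embed(ids), hidden)
        return self.out(output), hidden         # next-unit logits per step

# Usage: "phasa" is OOV as a word but remains covered by its PM pieces.
ids = torch.tensor([to_hybrid_ids(["<s>", "sawasdee", "phasa", "khrap"])])
model = HybridRNNLM(len(vocab))
logits, _ = model(ids)
print(logits.shape)  # (1, sequence_length, vocab_size)
```

Because the embedding and output layers are shared across both unit types, the recurrent state naturally mixes word-level and PM-level context, which is the property the hybrid input vector is meant to exploit.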